1.
Sci Rep ; 14(1): 5509, 2024 03 06.
Article in English | MEDLINE | ID: mdl-38448517

ABSTRACT

Urban gas pipelines pose significant risks to public safety and infrastructure integrity, necessitating thorough risk assessment methodologies to mitigate potential hazards. This study investigates the dynamics of population distribution, demographic characteristics, and building structures to assess the risk associated with gas pipelines. Using geospatial analysis techniques, we analyze population distribution patterns during both day and night periods. Additionally, we conduct an in-depth vulnerability assessment considering multiple criteria maps, highlighting areas of heightened vulnerability in proximity to gas pipelines and older buildings. We incorporate the concept of individual risk and the intrinsic parameters of gas pipelines to develop a hazard map. Hazard analysis identifies areas with elevated risk, particularly around main pipeline intersections and high-pressure zones. Integrating the hazard and vulnerability assessments, we generate risk maps for both day and night periods, providing valuable insight into the spatial dynamics of risk distribution. The findings underscore the importance of considering temporal variations in risk assessment and of integrating demographic and structural factors into hazard analysis for informed decision-making in pipeline management and safety measures.
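As a rough illustration of how a hazard layer and day- and night-time vulnerability layers can be combined cell by cell into risk maps, the sketch below uses plain NumPy; the grids, the criteria behind them, and the multiplicative risk formulation are illustrative assumptions rather than the study's actual GIS workflow.

```python
# Illustrative sketch (not the study's code): combining hazard and vulnerability
# rasters into day/night risk maps. Grid sizes and layer contents are hypothetical.
import numpy as np

def normalize(layer: np.ndarray) -> np.ndarray:
    """Rescale a criteria layer to [0, 1] so layers are comparable."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer)

def risk_map(hazard: np.ndarray, vulnerability: np.ndarray) -> np.ndarray:
    """Cell-wise risk as the product of normalized hazard and vulnerability."""
    return normalize(hazard) * normalize(vulnerability)

# Hypothetical 100 x 100 raster grids around a pipeline corridor.
rng = np.random.default_rng(0)
hazard = rng.random((100, 100))      # e.g., derived from pressure, diameter, distance to pipeline
vuln_day = rng.random((100, 100))    # e.g., daytime population and building-age criteria
vuln_night = rng.random((100, 100))  # e.g., nighttime (residential) population criteria

risk_day = risk_map(hazard, vuln_day)
risk_night = risk_map(hazard, vuln_night)
print("mean day/night risk:", risk_day.mean(), risk_night.mean())
```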

2.
Sci Rep ; 14(1): 4275, 2024 02 21.
Article in English | MEDLINE | ID: mdl-38383597

ABSTRACT

The challenge of making flexible, standard, and early medical diagnoses is significant, yet several limitations have not been fully overcome. First, diagnosis rules established by medical experts or learned from a training dataset are static and overly general, which leads to decisions that lack adaptive flexibility in new circumstances. Second, interoperability of medical terminology is critical: it enables realistic, shared medical progress and avoids isolated systems and the difficulties of data exchange, analysis, and interpretation. Third, diagnostic criteria are heterogeneous and changeable, spanning symptoms, patient history, demographics, treatment, genetics, biochemistry, and imaging. Symptoms are a high-impact indicator for early detection, but they are closely tied to semantics, vary widely, and carry linguistic information, so they must be handled differently from other criteria; ignoring this degrades early diagnostic decision-making. In some circumstances the diagnosis rests solely on imaging and a few medical tests; although such diagnoses are highly accurate, can they be considered early, or do they merely confirm that the condition has already deteriorated? Our contribution in this paper is a real medical diagnostic system based on semantic, fuzzy, and dynamic decision rules. We integrate ontology-based semantic reasoning with fuzzy inference, which strengthens fuzzy reasoning and addresses knowledge-representation problems; for complications and symptoms, ontological semantic reasoning makes rule evaluation more interpretable, dynamic, and intelligent. A real-world case study on Alzheimer's disease (AD) using the ADNI dataset is presented. The proposed system diagnoses the AD-related classes with accuracies of 97.2%, 95.4%, 94.8%, 93.1%, and 96.3% for AD, LMCI, EMCI, SMC, and CN, respectively.
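A minimal sketch of the fuzzy-inference side of such a system, assuming triangular membership functions and two hypothetical symptom rules; the variables, cut-offs, and rules are illustrative placeholders, not the paper's ontology-driven rule base.

```python
# Minimal fuzzy-inference sketch (illustrative only): triangular memberships and
# two hypothetical symptom rules. Variable names and thresholds are assumptions.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(memory_score: float, adl_score: float) -> float:
    """Return a fuzzy degree of support for 'early cognitive impairment'."""
    low_memory = tri(memory_score, 0.0, 0.2, 0.5)   # membership of 'low memory'
    impaired_adl = tri(adl_score, 0.0, 0.3, 0.6)    # membership of 'impaired daily living'
    # Rule 1: IF memory is low AND daily living is impaired THEN impairment (min = AND)
    rule1 = min(low_memory, impaired_adl)
    # Rule 2: IF memory is low (alone) THEN weak support for impairment
    rule2 = 0.5 * low_memory
    # Aggregate rules with max (OR) - a stand-in for the paper's dynamic rule evaluation
    return max(rule1, rule2)

print(diagnose(memory_score=0.25, adl_score=0.35))
```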


Subjects
Alzheimer Disease, Semantics, Humans, Fuzzy Logic, Linguistics, Problem Solving
4.
Sci Rep ; 13(1): 16336, 2023 09 28.
Article in English | MEDLINE | ID: mdl-37770490

ABSTRACT

Alzheimer's disease (AD) is the most common form of dementia. Early and accurate detection of AD is crucial to plan for disease-modifying therapies that could prevent or delay conversion to severe stages of the disease. Because AD is a chronic disease, a patient's multivariate time series data, including neuroimaging, genetics, cognitive scores, and neuropsychological battery results, provide a complete profile of the patient's status. These data have been used to build machine learning and deep learning (DL) models for early detection of the disease. However, such models still have limited performance and are not stable enough to be trusted in real medical settings. The literature shows that DL models outperform classical machine learning models, and ensemble learning has proven to achieve better results than standalone models. This study proposes a novel deep stacking framework that combines multiple DL models to accurately predict AD at an early stage. The study uses long short-term memory (LSTM) models as base models over the patient's multivariate time series data to learn deep longitudinal features. Each base LSTM classifier is optimized with a Bayesian optimizer over a different feature set, so the final ensemble employs heterogeneous base models trained on heterogeneous data. The performance of the resulting ensemble model is explored using a cohort of 685 patients from the University of Washington's National Alzheimer's Coordinating Center dataset. Compared with classical machine learning models and the base LSTM classifiers, the proposed ensemble model achieves the highest testing results (i.e., 82.02, 82.25, 82.02, and 82.12 for accuracy, precision, recall, and F1-score, respectively). The resulting model improves on the state of the art and could be used to build an accurate clinical decision support tool that assists domain experts in detecting AD progression.
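A minimal sketch of the deep stacking idea, assuming TensorFlow/Keras and scikit-learn: several LSTM base classifiers are trained on different feature subsets and a logistic-regression meta-learner stacks their predictions. Shapes, feature splits, and hyperparameters are placeholders rather than the paper's tuned values.

```python
# Sketch of a deep stacking ensemble over multivariate time series (illustrative).
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

def make_lstm(n_steps: int, n_feats: int) -> tf.keras.Model:
    """One base LSTM classifier over a (visits x features) sequence."""
    inp = tf.keras.Input(shape=(n_steps, n_feats))
    x = tf.keras.layers.LSTM(32)(inp)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Hypothetical data: 685 patients, 6 visits, 20 longitudinal features, binary label.
rng = np.random.default_rng(0)
X = rng.random((685, 6, 20)).astype("float32")
y = rng.integers(0, 2, size=685)

# Train each base model on a different feature subset (heterogeneous base learners).
feature_sets = [slice(0, 10), slice(10, 20), slice(0, 20)]
base_preds = []
for fs in feature_sets:
    model = make_lstm(6, X[:, :, fs].shape[-1])
    model.fit(X[:, :, fs], y, epochs=2, batch_size=32, verbose=0)
    base_preds.append(model.predict(X[:, :, fs], verbose=0).ravel())

# Meta-learner stacks the base predictions (in practice, use out-of-fold predictions).
meta_X = np.column_stack(base_preds)
meta = LogisticRegression().fit(meta_X, y)
print("stacked training accuracy:", meta.score(meta_X, y))
```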


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Time Factors, Bayes Theorem, Cognitive Dysfunction/diagnosis, Neuroimaging/methods, Alzheimer Disease/diagnostic imaging, Computers
5.
PLoS One ; 18(5): e0285455, 2023.
Article in English | MEDLINE | ID: mdl-37167226

ABSTRACT

This study aims to predict head trauma outcomes for neurosurgical patients across children, adults, and the elderly. Because Machine Learning (ML) algorithms are helpful in the healthcare field, a comparative study of various ML techniques is conducted. Several algorithms are evaluated, including k-nearest neighbor, Random Forest (RF), C4.5, Artificial Neural Network, and Support Vector Machine (SVM), and their performance is assessed using anonymized patient data. A double classifier based on Henry Gas Solubility Optimization (HGSO) combined with the Aquila Optimizer (AQO) is then developed; it performs feature selection and classifies patients' outcome status into four states: mortality, morbidity, improved, or unchanged. The double classifiers are evaluated via various performance metrics including recall, precision, F-measure, accuracy, and sensitivity. Another contribution of this research is the novel use of a hybrid technique based on RF-SVM and HGSO to predict patient outcome status with high accuracy and to relate outcome status to age and mode of trauma. The algorithm is tested on data from more than 1,000 anonymized patients treated at the neurosurgical unit of Mansoura International Hospital, Egypt. Experimental results show that the proposed method achieves the highest accuracy of 99.2% (with population size = 30) compared with the other classifiers.
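A hedged sketch of wrapper feature selection around an RF + SVM voting pair; a simple random population search over binary feature masks stands in for the HGSO/AQO metaheuristics, with cross-validated accuracy as the fitness function, and synthetic data stands in for the hospital dataset.

```python
# Illustrative wrapper feature selection with an RF + SVM hybrid (not the authors'
# HGSO/AQO implementation): a random population of binary feature masks is scored
# by cross-validated accuracy and the best mask is kept.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of the RF+SVM hybrid on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = VotingClassifier(
        [("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
         ("svm", SVC(probability=True, random_state=0))],
        voting="soft",
    )
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Population of candidate masks (population size = 30, mirroring the abstract).
population = rng.integers(0, 2, size=(30, X.shape[1]))
scores = np.array([fitness(m) for m in population])
best = population[scores.argmax()]
print("best CV accuracy:", scores.max(), "features kept:", int(best.sum()))
```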


Subjects
Algorithms, Machine Learning, Adult, Child, Humans, Aged, Solubility, Neural Networks (Computer), Random Forest, Support Vector Machine
6.
Comput Methods Programs Biomed ; 234: 107495, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37003039

ABSTRACT

BACKGROUND AND OBJECTIVES: Parkinson's Disease (PD) is a devastating chronic neurological condition. Machine learning (ML) techniques have been used for early prediction of PD progression. Fusion of heterogeneous data modalities has proven capable of improving the performance of ML models, and time series data fusion supports tracking of the disease over time. In addition, the trustworthiness of the resulting models is improved by adding model explainability features. The literature on PD has not sufficiently explored these three points. METHODS: In this work, we propose an ML pipeline for predicting the progression of PD that is both accurate and explainable. We explore the fusion of different combinations of five time series modalities from the Parkinson's Progression Markers Initiative (PPMI) real-world dataset, including patient characteristics, biosamples, medication history, motor, and non-motor function data. Each patient has six visits. The problem is formulated in two ways: (1) a three-class progression prediction with 953 patients in each time series modality, and (2) a four-class progression prediction with 1,060 patients in each time series modality. Statistical features over these six visits are calculated from each modality, and diverse feature selection methods are applied to select the most informative feature sets. The extracted features are used to train a set of well-known ML models including support vector machines (SVM), random forests (RF), extra tree classifiers (ETC), light gradient boosting machines (LGBM), and stochastic gradient descent (SGD). We examine a number of data-balancing strategies in the pipeline with different combinations of modalities. The ML models are optimized using a Bayesian optimizer. A comprehensive evaluation of the ML methods is conducted, and the best models are extended to provide different explainability features. RESULTS: We compare the performance of the ML models before and after optimization and with and without feature selection. In the three-class experiment with various modality fusions, the LGBM model produced the most accurate results, with a 10-fold cross-validation (10-CV) accuracy of 90.73% using the non-motor function modality. RF produced the best results in the four-class experiment with various modality fusions, with a 10-CV accuracy of 94.57% using the non-motor modality. With the fused dataset of non-motor and motor function modalities, the LGBM model outperformed the other ML models in both the three-class and four-class experiments (10-CV accuracies of 94.89% and 93.73%, respectively). Using the SHapley Additive exPlanations (SHAP) framework, we employed global and instance-based explanations to explain the behavior of each ML classifier, and we extended the explainability by implementing the LIME and SHAPASH local explainers and exploring their consistency. The resulting classifiers are accurate, explainable, and thus medically more relevant and applicable. CONCLUSIONS: The selected modalities and feature sets were confirmed by the literature and medical experts. The various explainers suggest that the bradykinesia (NP3BRADY) feature was the most dominant and consistent. By providing thorough insights into the influence of multiple modalities on disease risk, the suggested approach is expected to help improve clinical knowledge of PD progression processes.
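A brief sketch of the global and instance-based SHAP explanation step for an LGBM classifier, assuming the lightgbm and shap packages are available; the synthetic features below merely stand in for the PPMI statistical features.

```python
# Illustrative SHAP explanation of an LGBM classifier (placeholder data, not PPMI).
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 5)),
                 columns=["NP3BRADY_mean", "tremor_mean", "age", "updrs_slope", "biosample_1"])
y = rng.integers(0, 3, size=500)  # hypothetical 3-class progression label

model = lgb.LGBMClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
# Global explanation: per-class SHAP arrays (exact return shape depends on the shap version).
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)        # overall feature importance
instance_values = explainer.shap_values(X.iloc[[0]])  # instance-based explanation for one patient
```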


Subjects
Parkinson Disease, Humans, Parkinson Disease/diagnosis, Bayes Theorem, Time Factors, Machine Learning, Random Forest
7.
Sci Rep ; 13(1): 791, 2023 01 16.
Article in English | MEDLINE | ID: mdl-36646735

ABSTRACT

Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging because of several indistinct structures, variations in anatomical shape among individuals, the presence of medical devices such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder-decoder convolutional neural network (CNN). The first network in the dual encoder-decoder structure uses a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network's representation power, enabling dynamic channel-wise feature recalibration. The recalibrated features are passed to the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder-decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that the proposed technique outperforms existing multi-class and single-class segmentation methods.
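A minimal Keras sketch of the squeeze-and-excitation (SE) recalibration applied to a VGG19 encoder's output; the input size, reduction ratio, and untrained weights are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a squeeze-and-excitation (SE) block that recalibrates encoder feature
# maps channel-wise. Layer sizes and the reduction ratio are illustrative.
import tensorflow as tf

def se_block(feature_map: tf.Tensor, reduction: int = 16) -> tf.Tensor:
    """Channel-wise recalibration: squeeze (global pooling) then excite (gating)."""
    channels = feature_map.shape[-1]
    squeeze = tf.keras.layers.GlobalAveragePooling2D()(feature_map)
    excite = tf.keras.layers.Dense(channels // reduction, activation="relu")(squeeze)
    excite = tf.keras.layers.Dense(channels, activation="sigmoid")(excite)
    excite = tf.keras.layers.Reshape((1, 1, channels))(excite)
    return tf.keras.layers.Multiply()([feature_map, excite])

# Example: apply the SE block to the output of a VGG19 encoder
# (weights="imagenet" in practice; None here to avoid a download).
inp = tf.keras.Input(shape=(256, 256, 3))
encoder = tf.keras.applications.VGG19(include_top=False, weights=None, input_tensor=inp)
calibrated = se_block(encoder.output)
model = tf.keras.Model(inp, calibrated)
model.summary()
```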


Subjects
Deep Learning, Humans, X-Rays, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Thorax/diagnostic imaging
8.
J Biomed Inform ; 135: 104216, 2022 11.
Article in English | MEDLINE | ID: mdl-36208833

ABSTRACT

Robust and rapid mortality prediction is crucial in intensive care units because it is one of the critical steps in treating patients with serious conditions. Combining mortality prediction with length of stay (LoS) prediction adds another level of importance to these models. No studies in the literature predict both tasks for neonates, especially using time-series data and dynamic ensemble techniques. Dynamic ensembles are novel techniques that dynamically select the base classifiers for each new case. Medically, implementing an accurate machine learning model is insufficient to gain the trust of physicians; the model must be able to justify its decisions. While explainable AI (XAI) techniques can address this challenge, no studies have applied them to neonate monitoring in the neonatal intensive care unit (NICU). This study utilizes advanced machine learning approaches to predict mortality and LoS through data-driven learning. We propose a multilayer dynamic ensemble-based model that predicts mortality as a classification task and LoS as a regression task for neonates admitted to the NICU. The model is built on each patient's time-series data from the first 24 h in the NICU. We utilized a cohort of 3,133 infants from the real-world MIMIC-III dataset to build and optimize the selected algorithms. The results show that the dynamic ensemble models achieved better results than other classifiers, and static ensemble regressors achieved better results than classical machine learning regressors. The proposed optimized model is supported by three well-known explainability techniques: SHAP, decision tree visualization, and a rule-based system. To provide online assistance to physicians in monitoring and managing neonates in the NICU, we implemented a web-based clinical decision support system based on the most accurate models and the selected XAI techniques. The code of the proposed models is publicly available at https://github.com/InfoLab-SKKU/neonateMortalityPrediction.
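An illustrative dynamic-selection sketch in the spirit of KNORA-Eliminate, not the paper's multilayer design: for each test sample, only base classifiers that are correct on its nearest validation neighbors vote; synthetic data stands in for the MIMIC-III cohort.

```python
# Dynamic ensemble selection sketch: per-sample competence estimated on a
# dynamic-selection (DSEL) set, then majority vote among the competent classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_dsel, X_test, y_dsel, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

pool = [LogisticRegression(max_iter=1000).fit(X_train, y_train),
        DecisionTreeClassifier(random_state=0).fit(X_train, y_train),
        RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)]
dsel_preds = np.array([clf.predict(X_dsel) for clf in pool])   # shape: pool x n_dsel
nn = NearestNeighbors(n_neighbors=5).fit(X_dsel)

def predict_one(x: np.ndarray) -> int:
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neighbors = idx[0]
    # Keep classifiers that are correct on every neighbor in the local region.
    competent = [i for i in range(len(pool))
                 if np.all(dsel_preds[i, neighbors] == y_dsel[neighbors])]
    if not competent:                       # fall back to the full pool
        competent = list(range(len(pool)))
    votes = [int(pool[i].predict(x.reshape(1, -1))[0]) for i in competent]
    return int(np.bincount(votes).argmax())

preds = np.array([predict_one(x) for x in X_test])
print("dynamic-selection accuracy:", (preds == y_test).mean())
```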


Subjects
Algorithms, Machine Learning, Newborn Infant, Humans, Intensive Care Units, Neonatal Intensive Care Units, Length of Stay
9.
IEEE J Biomed Health Inform ; 26(12): 5793-5804, 2022 12.
Article in English | MEDLINE | ID: mdl-36037451

ABSTRACT

In a hospital, accurate and rapid prediction of Length of Stay (LOS) is essential, since it is one of the key measures in treating patients with severe diseases. When LOS prediction is combined with readmission prediction, these models gain a new level of significance, because LOS and readmission rates are among the most expensive components of patient care. Several studies have treated hospital readmission as a single-task problem, yet optimizing multiple correlated tasks improves the performance, robustness, and stability of a model. This study develops a multimodal, multitask Long Short-Term Memory (LSTM) Deep Learning (DL) model that predicts both LOS and readmission using multi-sensory data from 47 patients. Continuous sensory data are divided into eight sections, each recorded for an hour. The time steps are constructed using a dual 10-second window-based technique, resulting in six steps per hour. Thirty statistical features are computed by transforming the sensory input into the resulting feature vector. The proposed multitask model predicts 30-day readmission as a binary classification problem and LOS as a regression task by constructing discrete time-step data based on the length of physical activity during the hospital stay. The proposed model is compared with a random forest on the single-task problems (classification or regression), because typical machine learning algorithms cannot handle the multitask setting. In addition, the sensory data are combined with other cost-effective modalities, such as demographics, laboratory tests, and comorbidities, to construct reliable models for personalized, cost-effective, and medically acceptable prediction. With a high accuracy of 94.84%, the proposed multitask multimodal DL model classifies the patient's readmission status and determines the patient's LOS in hospital with a minimal Mean Square Error (MSE) of 0.025 and Root Mean Square Error (RMSE) of 0.077, which is promising, effective, and trustworthy.
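A minimal multitask sketch in Keras, assuming a shared LSTM trunk with a sigmoid readmission head and a linear LOS head; the shapes, losses, and loss weights are placeholders rather than the paper's architecture.

```python
# Multitask LSTM sketch: one shared recurrent trunk, two heads
# (binary readmission classification + LOS regression). Illustrative only.
import numpy as np
import tensorflow as tf

n_steps, n_feats = 48, 30          # e.g., 8 hourly sections x 6 windows, 30 statistics
inp = tf.keras.Input(shape=(n_steps, n_feats))
shared = tf.keras.layers.LSTM(64)(inp)
readmit = tf.keras.layers.Dense(1, activation="sigmoid", name="readmission")(shared)
los = tf.keras.layers.Dense(1, activation="linear", name="los")(shared)

model = tf.keras.Model(inp, [readmit, los])
model.compile(optimizer="adam",
              loss={"readmission": "binary_crossentropy", "los": "mse"},
              loss_weights={"readmission": 1.0, "los": 1.0})

# Hypothetical training data for 47 patients.
rng = np.random.default_rng(0)
X = rng.random((47, n_steps, n_feats)).astype("float32")
y_readmit = rng.integers(0, 2, size=47)
y_los = rng.random(47).astype("float32")
model.fit(X, {"readmission": y_readmit, "los": y_los}, epochs=2, verbose=0)
```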


Subjects
Deep Learning, Humans, Length of Stay, Patient Readmission, Cost-Benefit Analysis, Machine Learning
10.
Comput Intell Neurosci ; 2021: 8439655, 2021.
Article in English | MEDLINE | ID: mdl-34603436

ABSTRACT

Early detection of Alzheimer's disease (AD) progression is crucial for proper disease management. Most studies concentrate on neuroimaging data from baseline visits only, ignoring the fact that AD is a chronic disease and a patient's data are naturally longitudinal. In addition, no studies examine the effect of dementia medications on the behavior of the disease. In this paper, we propose a machine learning-based architecture for early progression detection of AD based on multimodal data combining AD medication history and cognitive scores. We compare the performance of five popular machine learning techniques, including support vector machine, random forest, logistic regression, decision tree, and K-nearest neighbor, in predicting AD progression after 2.5 years. Extensive experiments are performed using an ADNI dataset of 1036 subjects. The cross-validation performance of most algorithms is improved by fusing the medication and cognitive score data. The results indicate the important role of the medications a patient takes in the progression of AD.
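A short sketch of the feature-level fusion and classifier comparison described above, using scikit-learn with synthetic placeholder data instead of ADNI; the column layout and sizes are assumptions.

```python
# Fuse medication and cognitive-score features, then compare the five classifiers
# from the abstract via cross-validation (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
cognitive = rng.random((1036, 8))       # e.g., longitudinal cognitive-score statistics
drugs = rng.integers(0, 2, (1036, 5))   # e.g., binary indicators of dementia medications
y = rng.integers(0, 2, size=1036)       # progression after 2.5 years (stable vs. converter)

fused = np.hstack([cognitive, drugs])   # simple feature-level fusion

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
for name, clf in models.items():
    print(name, cross_val_score(clf, fused, y, cv=5).mean())
```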


Subjects
Alzheimer Disease, Cognitive Dysfunction, Algorithms, Alzheimer Disease/drug therapy, Data Analysis, Humans, Machine Learning, Magnetic Resonance Imaging, Neuroimaging, Support Vector Machine
11.
Diagnostics (Basel) ; 11(7)2021 Jun 24.
Article in English | MEDLINE | ID: mdl-34202587

ABSTRACT

Since December 2019, the global population has faced the rapid spread of coronavirus disease (COVID-19). With the accelerating number of infected cases, the World Health Organization (WHO) has characterized COVID-19 as a pandemic that places a heavy burden on healthcare sectors in almost every country. The potential of artificial intelligence (AI) in this context is difficult to ignore: AI companies have been racing to develop innovative tools that help arm the world against this pandemic and minimize the disruption it may cause. The main objective of this study is to survey the decisive role of AI as a technology used to fight the COVID-19 pandemic. Five significant applications of AI for COVID-19 were found, including (1) COVID-19 diagnosis using various data types (e.g., images, sound, and text); (2) estimation of the possible future spread of the disease based on the currently confirmed cases; (3) association between COVID-19 infection and patient characteristics; (4) vaccine development and drug interaction; and (5) development of supporting applications. This study also compares current COVID-19 datasets. Based on the limitations of the current literature, this review highlights open research challenges that could inspire future applications of AI to COVID-19.

12.
Entropy (Basel) ; 22(8)2020 Jul 30.
Article in English | MEDLINE | ID: mdl-33286611

ABSTRACT

Spatial modulation (SM) is a multiple-input multiple-output (MIMO) technique that achieves MIMO capacity gains by conveying information through antenna indices while keeping the transmitter as simple as that of a single-input system. Quadrature SM (QSM) expands the spatial dimension of SM into in-phase and quadrature dimensions, which are used to transmit the real and imaginary parts of a signal symbol, respectively. Parallel QSM (PQSM) was recently proposed to achieve further gains in spectral efficiency: the transmit antennas are split into parallel groups, and QSM is performed independently in each group using the same signal symbol. In this paper, we analytically model the asymptotic pairwise error probability of PQSM. Accordingly, the constellation design for PQSM is formulated as an optimization problem over the sum of multivariate functions. We provide the proposed constellations for several values of the constellation size, number of transmit antennas, and number of receive antennas. The simulation results show that the proposed constellation outperforms the phase-shift keying (PSK) constellation by more than 10 dB and outperforms quadrature-amplitude modulation (QAM) schemes by approximately 5 dB for large constellation sizes and numbers of transmit antennas.
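A small illustrative QSM mapper, not the paper's PQSM constellation optimizer: input bits select a real-part antenna, an imaginary-part antenna, and a QPSK symbol; the antenna count and constellation are assumptions.

```python
# Quadrature spatial modulation (QSM) mapping sketch: the real and imaginary
# parts of the signal symbol are carried on separately selected antenna indices.
import numpy as np

N_T = 4                                   # transmit antennas (power of two)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def qsm_map(bits: np.ndarray) -> np.ndarray:
    """Map log2(Nt) + log2(Nt) + 2 bits to an Nt-dimensional QSM transmit vector."""
    na = int(np.log2(N_T))
    i_re = int("".join(map(str, bits[:na])), 2)           # real-part antenna index
    i_im = int("".join(map(str, bits[na:2 * na])), 2)     # imaginary-part antenna index
    sym = QPSK[int("".join(map(str, bits[2 * na:])), 2)]  # signal symbol
    x = np.zeros(N_T, dtype=complex)
    x[i_re] += sym.real                                   # in-phase spatial dimension
    x[i_im] += 1j * sym.imag                              # quadrature spatial dimension
    return x

bits = np.array([1, 0, 0, 1, 1, 1])       # 2 + 2 + 2 bits for Nt = 4 and QPSK
print(qsm_map(bits))
```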
